Welding ball edge bubble segmentation for ball grid array based on fully convolutional network and K-means clustering
ZHAO Ruixiang, HOU Honghua, ZHANG Pengcheng, LIU Yi, TIAN Zhu, GUI Zhiguo
Journal of Computer Applications    2019, 39 (9): 2580-2585.   DOI: 10.11772/j.issn.1001-9081.2019030523

In Ball Grid Array (BGA) bubble detection, diverse image interference factors produce edge bubbles in welding balls and background regions of similar gray level, which lead to inaccurate segmentation. To address this, a welding ball bubble segmentation method based on a Fully Convolutional Network (FCN) and K-means clustering was proposed. Firstly, an FCN was constructed and trained on a labeled BGA dataset to obtain an appropriate network model, and a rough segmentation was obtained by predicting and post-processing the BGA image under detection. Secondly, the welding ball region mapping was extracted, the bubble regions were made more distinguishable by homomorphic filtering, and the image was then subdivided by K-means clustering to obtain the final segmentation. Finally, the welding ball and bubble regions were labeled and identified in the original image. Compared with traditional BGA bubble segmentation algorithms, the experimental results show that the proposed algorithm accurately segments the edge bubbles of complex BGA welding balls, and the segmentation results match the true contours closely with higher accuracy.
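
The final refinement stage is compact enough to sketch. The snippet below re-clusters gray levels inside the coarse FCN mask with K-means, assuming the homomorphic filtering has already been applied upstream and that bubbles form the brighter cluster; the function name and parameter values are illustrative, not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_bubble_mask(gray, coarse_mask, n_clusters=2):
    """Re-segment the pixels inside the coarse FCN mask with K-means.

    gray        -- 2-D float array, the (homomorphically filtered) image
    coarse_mask -- 2-D bool array, rough welding-ball region from the FCN
    Returns a bool mask of the brighter cluster (candidate bubbles) --
    an assumption of this sketch, not a claim from the paper.
    """
    vals = gray[coarse_mask].reshape(-1, 1)          # intensities in the region
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(vals)
    means = [vals[labels == k].mean() for k in range(n_clusters)]
    refined = np.zeros_like(coarse_mask)
    refined[coarse_mask] = labels == int(np.argmax(means))
    return refined
```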

Industrial X-ray image enhancement algorithm based on gradient field
ZHOU Chong, LIU Huan, ZHAO Ailing, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2019, 39 (10): 3088-3092.   DOI: 10.11772/j.issn.1001-9081.2019040694
In X-ray inspection of components with uneven thickness, low contrast, uneven contrast, or low illumination often occurs, making it difficult to observe and analyze some details of the components in the acquired images. To solve this problem, an X-ray image enhancement algorithm based on the gradient field was proposed. The algorithm takes gradient-field enhancement as its core and consists of two steps. Firstly, an algorithm based on logarithmic transformation was proposed to compress the gray range of the image, remove redundant gray information and improve contrast. Then, an algorithm based on the gradient field was proposed to enhance image details and improve local contrast and image quality, so that the details of components could be clearly displayed on the detection screen. A group of X-ray images of components with uneven thickness was selected for experiments, and comparisons with algorithms such as Contrast Limited Adaptive Histogram Equalization (CLAHE) and homomorphic filtering were carried out. Experimental results show that the proposed algorithm has a more obvious enhancement effect and displays the detailed information of the components better. Quantitative evaluation by average gradient and No-Reference Structural Sharpness (NRSS) texture analysis further demonstrates the effectiveness of the algorithm.
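
The two-step structure (logarithmic compression, then gradient-field detail boosting) can be sketched as below, assuming a Fattal-style gradient remap and an FFT Poisson solve with periodic boundaries as a stand-in for the paper's exact formulation; all parameter values are illustrative.

```python
import numpy as np

def enhance_gradient_field(img, alpha=0.1, beta=0.8):
    """Log compression followed by gradient-field detail boosting.

    img is a float image scaled to [0, 1].  Weak gradients are amplified
    and strong ones damped, then the image is recovered from the modified
    gradient field by an FFT Poisson solve (periodic-boundary assumption).
    """
    g = np.log1p(img)                               # step 1: compress grays
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy) + 1e-6
    scale = np.clip((alpha / mag) ** beta, 0.2, 5.0)
    gx, gy = gx * scale, gy * scale
    # step 2: solve  lap(out) = div(gx, gy)  in the Fourier domain
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    fy = np.fft.fftfreq(g.shape[0]).reshape(-1, 1)
    fx = np.fft.fftfreq(g.shape[1]).reshape(1, -1)
    denom = 2 * (np.cos(2 * np.pi * fx) - 1) + 2 * (np.cos(2 * np.pi * fy) - 1)
    denom[0, 0] = 1.0                               # DC level is arbitrary
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return (out - out.min()) / (np.ptp(out) + 1e-12)  # rescale to [0, 1]
```
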
Stationary wavelet domain deep residual convolutional neural network for low-dose computed tomography image estimation
GAO Jingzhi, LIU Yi, BAI Xu, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2018, 38 (12): 3584-3590.   DOI: 10.11772/j.issn.1001-9081.2018040833
Concerning the large amount of noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a deep residual convolutional neural network in the stationary wavelet domain (SWT-CNN) was proposed to estimate the Normal-Dose Computed Tomography (NDCT) image from an LDCT image. In the training phase, the high-frequency coefficients of the three-level Stationary Wavelet Transform (SWT) decomposition of LDCT images were taken as inputs; the residual coefficients, obtained by subtracting the high-frequency coefficients of the NDCT images from those of the LDCT images, were taken as labels; and the mapping between inputs and labels was learned by a deep CNN. In the testing phase, the high-frequency coefficients of the NDCT image were predicted from those of the LDCT image by using this mapping, and the predicted NDCT image was reconstructed by the inverse Stationary Wavelet Transform (ISWT). Fifty pairs of 512×512 normal-dose chest and abdominal scan slices of the same phantom, together with images reconstructed after adding noise to the projection data, were used as the dataset, of which 45 pairs constituted the training set and the remaining 5 pairs the test set. The SWT-CNN model was compared with state-of-the-art methods such as Non-Local Means (NLM), the K-Singular Value Decomposition (K-SVD) algorithm, Block-Matching and 3D filtering (BM3D), and image-domain CNN (Image-CNN). The experimental results show that the NDCT images predicted by the SWT-CNN model have higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) and smaller Root Mean Square Error (RMSE) than those of the other algorithms. The proposed model is feasible and effective in improving the quality of low-dose CT images.
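
A sketch of how the training pairs could be assembled with PyWavelets, assuming pywt.swt2 as a stand-in for the paper's SWT implementation; the wavelet choice is illustrative.

```python
import numpy as np
import pywt

def highfreq_stack(img, wavelet="db1", level=3):
    """Stack the detail (high-frequency) sub-bands of a level-3 SWT.

    Returns an array of shape (3*level, H, W); assumes image sides are
    divisible by 2**level (e.g. 512 x 512 slices).
    """
    coeffs = pywt.swt2(img, wavelet, level=level)
    return np.stack([b for _, (cH, cV, cD) in coeffs for b in (cH, cV, cD)])

# Residual-learning pairs: the network input is the LDCT detail stack,
# the label is the coefficient residual (LDCT details minus NDCT details):
#   x = highfreq_stack(ldct);  r = x - highfreq_stack(ndct)
# At test time the NDCT details are recovered as x - cnn(x), then the
# image is rebuilt with the inverse transform.
```
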
Segmentation of cervical nuclei based on fully convolutional network and conditional random field
LIU Yiming, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2018, 38 (11): 3348-3354.   DOI: 10.11772/j.issn.1001-9081.2018050988
Aiming at inaccurate segmentation of cervical nuclei caused by their complex and diverse shapes in cervical cancer screening, a new nuclei segmentation method combining a Fully Convolutional Network (FCN) and a dense Conditional Random Field (CRF) was proposed. Firstly, a Tiny-FCN (T-FCN) was built according to the characteristics of the Herlev dataset; using pixel-level prior information of the nucleus region, multi-level features were learned autonomously to obtain a rough segmentation of the nuclei. Then, small mis-segmented regions were eliminated and the segmentation was refined by minimizing the energy function of the dense CRF, which incorporates the label, intensity and position information of all pixels in a cell image. Experimental results on the Herlev Pap smear dataset show that precision, recall and the Zijdenbos Similarity Index (ZSI) are all above 0.9, indicating that the nuclei boundaries obtained by the proposed method match the ground truth closely and the segmentation is accurate. Whereas traditional methods score lower on abnormal nuclei than on normal ones, the proposed method segments abnormal nuclei even better than normal nuclei.
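
The dense CRF refinement stage can be sketched with the third-party pydensecrf package, which implements fully connected CRFs of this kind; there is no indication the authors used this package, and the pairwise parameters below are illustrative.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(probs, rgb, iters=5):
    """Refine FCN softmax output with a fully connected CRF.

    probs -- float array (n_labels, H, W), softmax scores from the FCN
    rgb   -- uint8 image (H, W, 3)
    """
    n_labels, H, W = probs.shape
    d = dcrf.DenseCRF2D(W, H, n_labels)
    d.setUnaryEnergy(unary_from_softmax(probs))   # unary = -log P(label)
    d.addPairwiseGaussian(sxy=3, compat=3)        # positional smoothness
    d.addPairwiseBilateral(sxy=60, srgb=10,       # intensity + position term
                           rgbim=np.ascontiguousarray(rgb), compat=5)
    q = d.inference(iters)                        # mean-field iterations
    return np.argmax(q, axis=0).reshape(H, W)
```
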
Combination of improved diffusion and bilateral filtering for low-dose CT reconstruction
ZHANG Pengcheng, ZHANG Quan, ZHANG Fang, CHEN Yan, HAN Jianning, HAO Huiyan, GUI Zhiguo
Journal of Computer Applications    2016, 36 (4): 1100-1105.   DOI: 10.11772/j.issn.1001-9081.2016.04.1100
A Median Prior (MP) reconstruction algorithm combining non-local means fuzzy diffusion and an extended-neighborhood bilateral filter was proposed to reduce the streak artifacts in low-dose Computed Tomography (CT) reconstruction. In the new algorithm, the non-local means fuzzy diffusion method was first used to improve the median prior in the Maximum A Posteriori (MAP) reconstruction algorithm, which reduced the noise in the reconstructed image; then, bilateral filtering based on an extended neighborhood was applied to preserve the edges and details of the reconstructed image and improve the Signal-to-Noise Ratio (SNR). The Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. Compared with the Filtered Back Projection (FBP), Median Root Prior (MRP), NonLocal Mean MP (NLMMP) and NonLocal Mean Bilateral Filter MP (NLMBFMP) algorithms, the proposed method has the smallest Normalized Mean Square Distance (NMSD) and Mean Absolute Error (MAE) and the highest SNR (10.20 dB and 15.51 dB, respectively) on the two test images. The experimental results show that the proposed reconstruction algorithm can reduce noise while keeping the edges and details of the image, alleviating the degradation of low-dose CT images and yielding images of higher SNR and quality.
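
The extended-neighborhood bilateral filtering step, taken in isolation, looks roughly like this; the neighborhood radius and sigma values are illustrative, not the paper's.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter over a (2*radius+1)^2 extended neighborhood."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (nb - img) ** 2 / (2 * sigma_r ** 2))
            out += w * nb                 # spatial and range weights together
            norm += w
    return out / norm
```
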
Statistical iterative algorithm based on adaptive weighted total variation for low-dose CT
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Wen, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2016, 36 (10): 2916-2921.   DOI: 10.11772/j.issn.1001-9081.2016.10.2916
Concerning the streak artifacts and impulse noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a statistical iterative reconstruction method based on adaptively weighted Total Variation (TV) was presented. Since traditional TV may introduce staircase effects while suppressing streak artifacts, an adaptively weighted TV model combining a weighting factor based on weighted variation with the TV model was proposed, and the new model was applied to Penalized Weighted Least Squares (PWLS). Different areas of the image were processed with different denoising intensities, achieving both noise suppression and edge preservation. The Shepp-Logan model and a digital pelvis phantom were used to test the effectiveness of the proposed algorithm. Experimental results show that, compared with the Filtered Back Projection (FBP), PWLS, PWLS-Median Prior (PWLS-MP) and PWLS-TV algorithms, the proposed method has smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on the two test images, and achieves Peak Signal-to-Noise Ratios (PSNR) of 40.91 dB and 42.25 dB, respectively. The proposed algorithm preserves image details and edges well while effectively eliminating streak artifacts.
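
The adaptive weighting idea can be sketched as a single descent step on a weighted TV penalty, with a weight that relaxes smoothing near strong edges; this illustrates the principle, not the paper's exact PWLS update.

```python
import numpy as np

def adaptive_tv_step(u, lam=0.1, k=0.05, tau=0.2, eps=1e-8):
    """One descent step on an adaptively weighted TV penalty.

    The weight w = 1 / (1 + (|grad u| / k)^2) is ~1 in flat regions and
    small near strong edges, so smoothing relaxes where edges live.
    """
    gy, gx = np.gradient(u)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    w = 1.0 / (1.0 + (mag / k) ** 2)
    # divergence of the weighted, normalized gradient (weighted curvature)
    div = np.gradient(w * gx / mag, axis=1) + np.gradient(w * gy / mag, axis=0)
    return u + tau * lam * div
```
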
Adaptive total generalized variation denoising algorithm for low-dose CT images
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Fang, ZHANG Pengcheng, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2016, 36 (1): 243-247.   DOI: 10.11772/j.issn.1001-9081.2016.01.0243
A new denoising algorithm, Adaptive Total Generalized Variation (ATGV), was proposed for removing streak artifacts in reconstructed low-dose Computed Tomography (CT) images. Since traditional Total Generalized Variation (TGV) tends to blur edge details, intuitionistic fuzzy entropy, which can distinguish smooth regions from detail regions, was introduced into the TGV algorithm, so that different areas of the image were processed with different denoising intensities and image details were well preserved. Firstly, the Filtered Back Projection (FBP) algorithm was used to obtain a reconstructed image. Secondly, an edge indicator function based on intuitionistic fuzzy entropy was applied to improve the TGV algorithm. Finally, the new algorithm was employed to reduce the noise in the reconstructed image. Simulated low-dose CT reconstructions of the Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. The experimental results show that, compared with the Total Variation (TV) and TGV algorithms, the proposed algorithm has smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on the two test images, and achieves high Peak Signal-to-Noise Ratios (PSNR) of 26.90 dB and 44.58 dB, respectively. The proposed algorithm can thus effectively preserve image details and edges while reducing streak artifacts.
Topic evolution in text stream based on feature ontology
CHEN Qian, GUI Zhiguo, GUO Xin, XIANG Yang
Journal of Computer Applications    2015, 35 (2): 456-460.   DOI: 10.11772/j.issn.1001-9081.2015.02.0456

In the era of big data, research on topic evolution is mostly based on classical probabilistic topic models, whose bag-of-words assumption leads to a lack of semantics in topics and makes evolution analysis a retrospective process. An online, incremental, feature-ontology-based topic evolution algorithm was proposed to tackle these problems. First, a feature ontology was built from word co-occurrence and the general WordNet ontology base, with which the topics in the text stream were modeled. Secondly, a text stream topic matrix construction algorithm was put forward to realize online incremental topic evolution analysis. Finally, a topic ontology evolution diagram construction algorithm was put forward based on the text stream topic matrix, and topic similarity was computed using sub-graph similarity, so that the evolution of topics in the text stream was obtained along the time scale. Experiments on scientific literature show that the proposed algorithm reduces time complexity to O(nK+N), outperforming the classical probabilistic topic evolution model and performing no worse than sliding-window based Latent Dirichlet Allocation (LDA). With the ontology and its semantic relations introduced, the proposed algorithm can present the semantic features of topics graphically and build the topic evolution diagram incrementally, giving it advantages in semantic interpretability and topic visualization.
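
The sub-graph similarity used to link topics across time windows might be approximated as set overlap of ontology triples; the representation and the threshold below are assumptions for illustration, not the paper's exact measure.

```python
def topic_similarity(g1, g2):
    """Jaccard overlap of two topic sub-graphs, each represented as a
    set of (concept, relation, concept) triples."""
    union = g1 | g2
    return len(g1 & g2) / len(union) if union else 0.0

def link_topics(topics_t, topics_t1, threshold=0.3):
    """Link each topic at time t to its best match at time t+1."""
    links = []
    for i, g in enumerate(topics_t):
        score, j = max((topic_similarity(g, h), j)
                       for j, h in enumerate(topics_t1))
        if score >= threshold:
            links.append((i, j, score))   # an edge of the evolution diagram
    return links
```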

Symmetry optimization of polar coordinate back-projection reconstruction algorithm for fan beam CT
ZHANG Jing, ZHANG Quan, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2014, 34 (6): 1711-1714.   DOI: 10.11772/j.issn.1001-9081.2014.06.1711

To improve the speed of image reconstruction based on fan-beam Filtered Back Projection (FBP), a new optimized fast reconstruction method was proposed for the polar-coordinate back-projection algorithm. According to the symmetry of trigonometric functions, the preprocessed projection data were back-projected onto symmetric positions of the polar grid at the same time. During the coordinate transformation of the back-projected data, the computation of bilinear interpolation was reduced by exploiting the symmetry of the pixel position parameters. The experimental results show that, compared with the traditional convolution back-projection algorithm, the proposed method speeds up reconstruction by more than a factor of eight without sacrificing image quality. The method is also applicable to 3D cone-beam reconstruction and can be extended to multi-slice spiral three-dimensional reconstruction.
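
The trigonometric symmetry can be demonstrated on the simpler parallel-beam geometry, where the detector coordinate of a polar point is t = r·cos(θ − φ): one cos/sin pair serves the four points φ, φ+90°, φ+180°, φ+270°. The fan-beam version in the paper adds distance weighting and bilinear interpolation, which this sketch omits.

```python
import numpy as np

def backproject_polar(sino, thetas, rs, phis):
    """Polar-grid back-projection with four-fold cos/sin reuse.

    sino[i] -- filtered parallel-beam projection at angle thetas[i],
               sampled on detector coordinates in [-1, 1]
    rs      -- radii (<= 1);  phis -- angles covering [0, pi/2) only
    Column j + q*len(phis) of the result holds angle phis[j] + q*90 deg.
    """
    det = np.linspace(-1.0, 1.0, sino.shape[1])
    img = np.zeros((len(rs), 4 * len(phis)))
    for th, p in zip(thetas, sino):
        for j, ph in enumerate(phis):
            c, s = np.cos(th - ph), np.sin(th - ph)   # evaluated once ...
            for q, t in enumerate((c, s, -c, -s)):    # ... reused four times
                img[:, j + q * len(phis)] += np.interp(rs * t, det, p)
    return img / len(thetas)
```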

High quality positron emission tomography reconstruction algorithm based on correlation coefficient and forward-and-backward diffusion
SHANG Guanhong, LIU Yi, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2014, 34 (5): 1482-1485.   DOI: 10.11772/j.issn.1001-9081.2014.05.1482

In Positron Emission Tomography (PET) imaging, traditional iterative algorithms suffer from loss of details and blurred object edges. A high quality Median Prior (MP) reconstruction algorithm based on the correlation coefficient and Forward-And-Backward (FAB) diffusion was proposed to solve this problem. Firstly, a characteristic factor called the correlation coefficient was introduced to represent the local gray-level information of the image, and a new model was formed by combining it with the forward-and-backward diffusion model. Secondly, since forward-and-backward diffusion has the advantage of treating background and edges separately, the proposed model was applied to the Maximum A Posteriori (MAP) reconstruction algorithm with a median prior distribution, yielding a median prior reconstruction algorithm based on forward-and-backward diffusion. The simulation results show that the new algorithm removes image noise while preserving object edges well; the Signal-to-Noise Ratio (SNR) and Root Mean Squared Error (RMSE) also confirm the improvement in reconstructed image quality.
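
A common forward-and-backward diffusion coefficient (after Gilboa et al.) and one explicit diffusion step are sketched below; the parameter values are illustrative, and the coupling to the MAP iteration is omitted.

```python
import numpy as np

def fab_coefficient(grad, kf=0.05, kb=0.2, w=0.1, alpha=0.3, n=4, m=1):
    """Forward-and-backward diffusion coefficient: positive (smoothing)
    at small gradients -- background -- and negative (sharpening)
    around |grad| ~ kb -- edges."""
    return (1.0 / (1.0 + (grad / kf) ** n)
            - alpha / (1.0 + ((grad - kb) / w) ** (2 * m)))

def fab_step(u, dt=0.1, **kw):
    """One explicit forward-and-backward diffusion step."""
    gy, gx = np.gradient(u)
    c = fab_coefficient(np.hypot(gx, gy), **kw)
    return u + dt * (np.gradient(c * gx, axis=1) + np.gradient(c * gy, axis=0))
```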

MLEM low-dose CT reconstruction algorithm based on variable exponent anisotropic diffusion and non-locality
ZHANG Fang, CUI Xueying, ZHANG Quan, DONG Chanchan, SUN Weiya, BAI Yunjiao, GUI Zhiguo
Journal of Computer Applications    2014, 34 (12): 3605-3608.  

Concerning the serious degradation of low-dose Computed Tomography (CT) reconstructed images, an MLEM low-dose CT reconstruction method based on non-locality and a variable exponent was presented. Since traditional anisotropic diffusion denoising is insufficient, a variable exponent that effectively mediates between the heat-conduction and anisotropic-diffusion (P-M) models, and a similarity function that detects edges and details in place of the gradient, were introduced into traditional anisotropic diffusion to achieve the desired effect. In each iteration, the basic MLEM algorithm was first used to reconstruct the low-dose projection data; then the diffusion function was improved by the non-local similarity measure, the variable exponent and fuzzy mathematics theory, and the improved anisotropic diffusion was used to denoise the reconstructed image; finally, median filtering was applied to remove impulse noise points in the image. The experimental results show that the proposed algorithm yields smaller Mean Absolute Error (MAE) and Normalized Mean Square Distance (NMSD) than OS-PLS (Ordered Subsets-Penalized Least Squares), OS-PML-OSL (Ordered Subsets-Penalized Maximum Likelihood-One Step Late) and the algorithm based on the traditional P-M model, and its Signal-to-Noise Ratio (SNR) reaches 10.52. The algorithm can effectively eliminate streak artifacts while preserving image edges and detail information.
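
The variable-exponent idea can be sketched as follows: the exponent p(x) slides between 2 (heat conduction) and 1 (TV-like P-M behavior) according to a local patch measure that stands in for the non-local similarity function; this illustrates the mechanism, not the paper's exact diffusion function.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variable_exponent_step(u, k=0.05, dt=0.1, size=5, eps=1e-6):
    """One step of variable-exponent diffusion.

    Local patch variability (windowed standard deviation) replaces the
    plain gradient in the exponent; p -> 2 in smooth areas, p -> 1 near
    edges, giving stronger smoothing where the image is flat.
    """
    mean = uniform_filter(u, size)
    d = np.sqrt(np.maximum(uniform_filter(u * u, size) - mean ** 2, 0.0))
    p = 1.0 + 1.0 / (1.0 + (d / k) ** 2)
    gy, gx = np.gradient(u)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    c = mag ** (p - 2.0)                   # |grad u|^(p-2) flux weight
    return u + dt * (np.gradient(c * gx, axis=1) + np.gradient(c * gy, axis=0))
```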

Patch similarity anisotropic diffusion algorithm based on variable exponent for image denoising
DONG Chanchan, ZHANG Quan, HAO Huiyan, ZHANG Fang, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2014, 34 (10): 2963-2966.   DOI: 10.11772/j.issn.1001-9081.2014.10.2963

Concerning the contradiction between edge preservation and noise suppression in image denoising, a patch-similarity anisotropic diffusion algorithm based on a variable exponent was proposed. The algorithm combined an adaptive variable-exponent Perona-Malik (PM) model with the idea of patch similarity, constructing a new edge indicator and a new diffusion coefficient function. Traditional anisotropic diffusion algorithms, which detect edges from the intensity similarity (or gradient) of single pixels, cannot effectively preserve weak edges and details such as texture; by exploiting the intensity similarity of neighboring patches, the proposed algorithm preserves more detail information while removing noise. The simulation results show that, compared with traditional denoising algorithms based on Partial Differential Equations (PDE), the proposed algorithm improves the Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR) to 16.602480 dB and 31.284672 dB respectively, and enhances noise robustness, while the filtered image preserves more detail features such as weak edges and textures and has a good visual appearance. The algorithm thus achieves a good balance between noise reduction and edge preservation.
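
The patch-based edge indicator can be sketched by replacing per-pixel differences with patchwise sums of squared differences between neighboring patches; the window size and normalization are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def patch_edge_indicator(u, radius=2):
    """Edge indicator from distances between neighboring patches.

    The squared difference to each 4-neighbor is averaged over a
    (2*radius+1)^2 window -- a patchwise SSD -- so weak edges that are
    coherent across a whole patch still register.
    """
    ind = np.zeros_like(u, dtype=float)
    for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
        ind += uniform_filter((u - np.roll(u, sh, axis=ax)) ** 2,
                              2 * radius + 1)
    return np.sqrt(ind / 4.0)

# A PM-type diffusion coefficient built from the indicator:
#   c = 1.0 / (1.0 + (patch_edge_indicator(u) / k) ** 2)
```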

Fuzzy diffusion PET reconstruction algorithm based on anatomical non-local means prior
SHANG Guanhong, LIU Yi, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2013, 33 (09): 2627-2630.   DOI: 10.11772/j.issn.1001-9081.2013.09.2627
A fuzzy diffusion Positron Emission Tomography (PET) reconstruction algorithm based on an anatomical non-local means prior was proposed to address two problems of the traditional Maximum A Posteriori (MAP) algorithm: details at low gradient values of the reconstructed image cannot be maintained effectively, and staircase artifacts appear. Firstly, the MAP reconstruction algorithm with a median prior distribution was improved by introducing an anisotropic diffusion filter combined with a fuzzy function before each median filtering. Secondly, the fuzzy membership function was used as the diffusion coefficient in the anisotropic diffusion process, and image details were accounted for through anatomical non-local prior information. The simulation results show that, compared with the traditional algorithms, the new algorithm improves the Signal-to-Noise Ratio (SNR) and noise robustness, and produces images with good visual quality and clear edges, achieving a good balance between noise reduction and edge preservation.
Non-local means denoising approach based on dictionary learning
CUI Xueying, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2013, 33 (05): 1420-1422.   DOI: 10.3724/SP.J.1087.2013.01420
Concerning the measurement of similarity in non-local means, a method based on dictionary learning was presented. First, block matching based local pixel grouping was used to eliminate interference from dissimilar image blocks. Then, the corrupted similar blocks were denoised by dictionary learning: extending the classical sparse representation model, the similar patches were represented jointly over a compact and efficient dictionary learned by principal component analysis, so that the correlation among similar patches was well preserved. The similarity between pixels was then measured by the Euclidean distance between the denoised image blocks, which reflects the similarity of the blocks well. The experimental results show that the modified algorithm has superior denoising performance to the original one in terms of both Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality. For images with high structural similarity and rich detail information, structures and details are well preserved, and the robustness of the presented method is superior to that of the original one.
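
The dictionary step can be sketched as group-wise PCA: a compact basis is learned from each stack of matched patches and the patches are projected onto its leading components before distances are measured; the energy threshold is illustrative.

```python
import numpy as np

def pca_denoise_group(patches, keep=0.9):
    """Jointly denoise a group of similar patches with a PCA dictionary.

    patches -- (n, d) array of vectorized similar patches from block
    matching.  A compact basis is learned from the group itself, and the
    patches are projected onto the components carrying a `keep` share of
    the energy.
    """
    mean = patches.mean(axis=0)
    x = patches - mean
    cov = x.T @ x / max(len(x) - 1, 1)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]                 # descending eigenvalues
    evals, evecs = evals[order], evecs[:, order]
    r = int(np.searchsorted(np.cumsum(evals) / evals.sum(), keep)) + 1
    d = evecs[:, :r]                                # compact PCA dictionary
    return (x @ d) @ d.T + mean                     # reconstruct = denoise

# Non-local means weights are then measured between *denoised* patches:
#   w_ij = exp(-||p_i - p_j||**2 / h**2)
```
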
High quality ordered subset expectation maximization reconstruction algorithm based on multi-resolution for PET images
ZHANG Quan, FU Xuejing, LI Xiaohong, GUI Zhiguo
Journal of Computer Applications    2013, 33 (03): 648-650.   DOI: 10.3724/SP.J.1087.2013.00648
In Positron Emission Tomography (PET) imaging, the Maximum Likelihood Expectation Maximization (MLEM) algorithm cannot be directly applied to clinical diagnosis because it suppresses noise ineffectively and converges slowly. Although the Ordered Subset Expectation Maximization (OSEM) algorithm converges fast, it leads to a significant decline in the quality of the reconstructed image. To address this problem, multi-resolution technology was introduced into the subset processing of the OSEM reconstruction algorithm to suppress noise and stabilize the solving process. The experimental results indicate that the new algorithm overcomes the traditional algorithm's degradation of the reconstructed image while retaining fast convergence, obtaining a higher Signal-to-Noise Ratio (SNR) and a superior visual effect.
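
The baseline OSEM update the method builds on is compact enough to sketch; the paper's multi-resolution smoothing inside the subset loop is not reproduced here.

```python
import numpy as np

def osem(y, A, n_subsets=8, n_iter=4):
    """Standard OSEM for emission tomography (dense system matrix).

    y -- measured counts (m,);  A -- system matrix (m, n).  The paper's
    multi-resolution smoothing would act on x inside the subset loop.
    """
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        for s in range(n_subsets):
            idx = np.arange(s, m, n_subsets)        # interleaved subset
            As = A[idx]
            ratio = y[idx] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```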